Results 1 - 20 of 59
1.
Sensors (Basel) ; 23(4)2023 Feb 05.
Article in English | MEDLINE | ID: covidwho-2286238

ABSTRACT

With the global spread of the novel coronavirus, avoiding human-to-human contact has become an effective way to cut off transmission of the virus, and contactless gesture recognition therefore offers an effective means of reducing the risk of contact infection during outbreak prevention and control. However, recognizing the everyday behavioral sign language of deaf users remains a challenge for sensing technology. Ubiquitous acoustics offers new ways to perceive everyday behavior: a low sampling rate, slow propagation speed, and readily available hardware have made acoustic signals a widespread basis for gesture recognition. This paper therefore proposes UltrasonicGS, a contactless gesture and sign language sensing method based on ultrasonic signals. The method uses Generative Adversarial Network (GAN)-based data augmentation to expand the dataset without human intervention and improve the performance of the behavior recognition model. In addition, to handle the inconsistent lengths and difficult alignment of the input and output sequences of continuous gestures and sign language gestures, we add a Connectionist Temporal Classification (CTC) layer after the CRNN network. This architecture also improves recognition of sign language behaviors, filling a gap in acoustic-based perception of Chinese Sign Language. We conducted extensive experiments and evaluations of UltrasonicGS in a variety of real scenarios. UltrasonicGS achieved a combined recognition rate of 98.8% for 15 single gestures and average correct recognition rates of 92.4% and 86.3% for six sets of continuous gestures and sign language gestures, respectively. The proposed method thus provides a low-cost and highly robust way to avoid human-to-human contact.
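
As a rough illustration of the alignment-free training step described above (a minimal sketch, not the authors' implementation; the network shape, spectrogram size, and class counts are assumptions), a CTC loss can be attached to a small CRNN so that variable-length gesture label sequences need no frame-level alignment:

import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    def __init__(self, n_classes, n_mels=64, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)))
        self.rnn = nn.GRU(16 * (n_mels // 2), hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes + 1)    # +1 output for the CTC blank symbol

    def forward(self, x):                  # x: (batch, 1, n_mels, frames)
        f = self.conv(x)                   # (batch, 16, n_mels/2, frames)
        b, c, m, t = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, t, c * m)
        h, _ = self.rnn(f)
        return self.fc(h).log_softmax(-1)  # (batch, frames, n_classes + 1)

model = TinyCRNN(n_classes=15)             # 15 single gestures, as in the abstract
spec = torch.randn(4, 1, 64, 200)          # placeholder ultrasonic spectrograms
log_probs = model(spec).permute(1, 0, 2)   # CTCLoss expects (frames, batch, classes)
targets = torch.randint(1, 16, (4, 6))     # 6 gesture labels per continuous sequence
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           torch.full((4,), 200), torch.full((4,), 6))
loss.backward()                            # trained without any frame-by-frame alignment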


Subject(s)
COVID-19 , Ultrasonics , Humans , Gestures , Sign Language , Acoustics
2.
Anal Chem ; 95(15): 6253-6260, 2023 04 18.
Article in English | MEDLINE | ID: covidwho-2286104

ABSTRACT

Acoustic mixing of droplets is a promising way to implement biosensors that combine high speed and minimal reagent consumption. To date, this type of droplet mixing is driven by a volume force resulting from the absorption of high-frequency acoustic waves in the bulk of the fluid. Here, we show that the speed of these sensors is limited by the slow advection of analyte to the sensor surface due to the formation of a hydrodynamic boundary layer. We eliminate this hydrodynamic boundary layer by using much lower ultrasonic frequencies to excite the droplet, which drives a Rayleigh streaming that behaves essentially like a slip velocity. At equal average flow velocity in the droplet, both experiment and three-dimensional simulations show that this provides a three-fold speedup compared to Eckart streaming. Experimentally, we further shorten a SARS-CoV-2 antibody immunoassay from 20 min to 40 s taking advantage of Rayleigh acoustic streaming.


Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , SARS-CoV-2 , Acoustics , Ultrasonics , Immunoassay
3.
Sci Rep ; 13(1): 4631, 2023 03 21.
Article in English | MEDLINE | ID: covidwho-2278476

ABSTRACT

The extraordinary circumstances of the COVID-19 pandemic led to measures to mitigate the spread of the disease, with lockdowns and mobility restrictions at national and international levels. These measures led to sudden and sometimes dramatic reductions in human activity, including significant reductions in ship traffic in the maritime sector. We report on a reduction of deep-ocean acoustic noise in three ocean basins in 2020, based on data acquired by hydroacoustic stations in the International Monitoring System of the Comprehensive Nuclear-Test-Ban Treaty. The noise levels measured in 2020 are compared with predicted levels obtained from modelling data from previous years using Gaussian Process regression. Comparison of the predictions with measured data for 2020 shows reductions of between 1 and 3 dB in the frequency range from 10 to 100 Hz for all but one of the stations.
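
A minimal sketch of the prediction idea described above (the weekly synthetic levels and kernel choices are assumptions, not the study's pipeline): fit a Gaussian Process to pre-2020 noise levels and compare its 2020 prediction, with uncertainty, against observations.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

week = np.arange(52 * 5).reshape(-1, 1)               # hypothetical weekly index, 2015-2019
spl = 95 + 2 * np.sin(2 * np.pi * week.ravel() / 52) + np.random.normal(0, 0.5, week.shape[0])

gpr = GaussianProcessRegressor(
    kernel=1.0 * RBF(length_scale=8.0) + WhiteKernel(noise_level=0.25),
    normalize_y=True)
gpr.fit(week, spl)                                    # learn the seasonal noise baseline

week_2020 = np.arange(52 * 5, 52 * 6).reshape(-1, 1)
pred, std = gpr.predict(week_2020, return_std=True)   # expected 2020 level and its uncertainty
# Measured 2020 levels falling below pred - 2 * std would flag an anomalously quiet period.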


Subject(s)
Acoustics , COVID-19 , Geographic Mapping , Noise , Oceans and Seas , COVID-19/epidemiology , Human Activities/statistics & numerical data , Ships/statistics & numerical data , Regression Analysis , Islands , Ecosystem , Noise, Transportation/statistics & numerical data
4.
IEEE J Transl Eng Health Med ; 11: 199-210, 2023.
Article in English | MEDLINE | ID: covidwho-2254789

ABSTRACT

BACKGROUND: The COVID-19 pandemic has highlighted the need for alternative respiratory health diagnosis methodologies that improve on existing ones with respect to time, cost, physical distancing, and detection performance. In this context, identifying acoustic bio-markers of respiratory diseases has received renewed interest. OBJECTIVE: In this paper, we aim to design COVID-19 diagnostics based on analyzing acoustics and symptoms data. Towards this, the data are composed of cough, breathing, and speech signals, together with health symptom records, collected using a web application over a period of twenty months. METHODS: We investigate the use of time-frequency features for the acoustic signals and binary features for encoding the different health symptoms. We experiment with classifiers such as logistic regression, support vector machines, and long short-term memory (LSTM) network models on the acoustic data, while decision tree models are proposed for the symptoms data. RESULTS: We show that a multi-modal integration of inference from the different acoustic signal categories and symptoms achieves an area under the curve (AUC) of 96.3%, a statistically significant improvement over any individual modality ([Formula: see text]). Experimentation with different feature representations suggests that mel-spectrogram acoustic features perform relatively better across the three kinds of acoustic signals. Further, a score analysis with data recorded from newer SARS-CoV-2 variants highlights the generalization ability of the proposed diagnostic approach for COVID-19 detection. CONCLUSION: The proposed method shows a promising direction for COVID-19 detection using a multi-modal dataset, while generalizing to new COVID variants.
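
The gain from combining modalities can be illustrated with a small late-fusion sketch (hypothetical scores, not the paper's data): per-modality probabilities are averaged and the fused AUC is compared with each single modality.

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 500)                     # 1 = COVID-positive (synthetic labels)
def noisy_score(labels, noise):                 # stand-in for one modality's classifier output
    return labels + rng.normal(0, noise, labels.size)

scores = {"cough": noisy_score(y, 1.0),
          "breathing": noisy_score(y, 1.2),
          "speech": noisy_score(y, 1.1),
          "symptoms": noisy_score(y, 0.9)}
for name, s in scores.items():
    print(name, round(roc_auc_score(y, s), 3))  # single-modality AUCs
fused = np.mean(list(scores.values()), axis=0)  # simple score averaging across modalities
print("fused", round(roc_auc_score(y, fused), 3))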


Subject(s)
COVID-19 , Humans , Pandemics , SARS-CoV-2 , Acoustics , COVID-19 Testing
5.
J Acoust Soc Am ; 153(2): 1204, 2023 02.
Article in English | MEDLINE | ID: covidwho-2253512

ABSTRACT

The intensive use of personal protective equipment often requires increased voice intensity, with possible development of voice disorders. This paper exploits machine learning approaches to investigate the impact of different types of masks on the sustained vowels /a/, /i/, and /u/ and the sequence /a'jw/ inside a standardized sentence. Both objective acoustical parameters and subjective ratings were used for statistical analysis, multiple comparisons, and multivariate machine learning classification experiments. Significant differences were found between the mask+shield configuration and no-mask, and between the mask and mask+shield conditions. Power spectral density decreases with statistical significance above 1.5 kHz when wearing masks. Subjective ratings confirmed increasing discomfort from the no-mask condition to protective masks and shield. Machine learning techniques showed that masks alter voice production: in a multiclass experiment, random forest (RF) models were able to distinguish among seven mask conditions with up to 94% validation accuracy, separating masked from unmasked conditions with up to 100% validation accuracy and detecting the shield presence with up to 86% validation accuracy. Moreover, an RF classifier allowed distinguishing male from female subjects in masked conditions with 100% validation accuracy. Combining acoustic and perceptual analysis represents a robust approach to characterize mask configurations and quantify the corresponding level of discomfort.
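
A minimal sketch of the multiclass experiment described above, assuming a table of per-recording acoustic measures (the feature count, sample sizes, and labels are placeholders, not the study's data):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.rand(210, 12)                 # 210 recordings x 12 acoustic features (placeholder)
y = np.repeat(np.arange(7), 30)             # 7 mask conditions, 30 recordings each
rf = RandomForestClassifier(n_estimators=300, random_state=0)
acc = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print(f"validation accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")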


Subject(s)
COVID-19 , Female , Male , Humans , Acoustics , Machine Learning , Personal Protective Equipment , Random Forest
6.
Folia Phoniatr Logop ; 74(5): 335-344, 2022.
Article in English | MEDLINE | ID: covidwho-2262962

ABSTRACT

INTRODUCTION: Voice diagnostics, including voice range profile (VRP) measurement and acoustic voice analysis, is essential in laryngology and phoniatrics. Due to the COVID-19 pandemic, wearing class 2 or 3 filtering face piece (FFP2/3) masks is recommended when high-risk aerosol-generating procedures such as singing and speaking are performed. The goal of this study was to compare VRP parameters measured without and with FFP2/3 masks. In addition, formant analysis of sustained vowels, the singer's formant, and analysis of a read standard text were performed without and with FFP2/3 masks. METHODS: Twenty subjects (6 males, 14 females) with an average age of 36 ± 16 years (mean ± SD) were enrolled in this study. Fourteen subjects were rated as euphonic/not hoarse and six as mildly hoarse. All subjects underwent VRP measurement and vowel and text recordings without and with an FFP2/3 mask, using the software DiVAS by XION medical (Berlin, Germany). The singing voice range, the equivalent of the voice extension measure (eVEM), fundamental frequency (F0), and sound pressure level (SPL) of soft speaking and shouting were calculated and analyzed. Maximum phonation time (MPT) and jitter (%) were included for the Dysphonia Severity Index (DSI) measurement. The singer's formant was analyzed. Spectral analyses of the sustained vowels /a:/, /i:/, and /u:/ (first formant F1 and second formant F2), the intensity of the long-term average spectrum, and the alpha ratio were calculated using the freeware Praat. RESULTS: For all subjects, the mean values of the routine voice parameters without and with mask were analyzed: no significant differences were found in singing voice range, eVEM, or the SPL and frequency of soft speaking/shouting, except for a significantly lower mean SPL of shouting with the FFP2/3 mask, particularly in the female subjects (p = 0.002). MPT, jitter, and DSI showed no significant differences without versus with the FFP2/3 mask. Further mean values analyzed without and with mask were the ratio of singer's formant to loud singing, which was lower with the FFP2/3 mask (p = 0.001), and F1 and F2 of /a:/, /i:/, and /u:/, which showed no significant differences except for a lower F2 of /i:/ with the FFP2/3 mask (p = 0.005). With the exceptions mentioned, t tests revealed no significant differences for any of the routine parameters tested in recordings without and with an FFP2/3 mask. CONCLUSION: VRP measurements, including the DSI, performed with FFP2/3 masks provide reliable data on voice condition/constitution in clinical routine. Spectral analyses of sustained vowels, text, and the singer's formant are affected by wearing FFP2/3 masks.
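
For reference, the Dysphonia Severity Index used above is commonly computed as the weighted combination proposed by Wuyts et al. (2000); a small sketch with illustrative (not study) input values:

def dysphonia_severity_index(mpt_s, f0_high_hz, i_low_db, jitter_pct):
    """DSI = 0.13*MPT + 0.0053*F0-High - 0.26*I-Low - 1.18*Jitter(%) + 12.4 (Wuyts et al., 2000)."""
    return 0.13 * mpt_s + 0.0053 * f0_high_hz - 0.26 * i_low_db - 1.18 * jitter_pct + 12.4

# e.g. MPT 20 s, highest F0 880 Hz, softest intensity 52 dB, jitter 0.4 %
print(round(dysphonia_severity_index(20, 880, 52, 0.4), 2))   # 5.67, i.e. a normal-range voice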


Subject(s)
Acoustics , Masks , Voice , Adult , COVID-19 , COVID-19 Testing , Female , Humans , Male , Middle Aged , Pandemics , Phonation , Speech Acoustics , Young Adult
7.
Int J Environ Res Public Health ; 20(4)2023 Feb 19.
Article in English | MEDLINE | ID: covidwho-2239282

ABSTRACT

Citizen science can serve as a tool to obtain information about changes in the soundscape. One of the challenges of citizen science projects is processing the data gathered by the citizens in order to draw conclusions. As part of the Sons al Balcó project, the authors aim to study the soundscape in Catalonia during and after the lockdown due to the COVID-19 pandemic, and to design a tool to automatically detect sound events as a first step towards assessing the quality of the soundscape. This paper details and compares the acoustic samples of the two collecting campaigns of the Sons al Balcó project: the 2020 campaign obtained 365 videos, while the 2021 campaign obtained 237. A convolutional neural network is then trained to automatically detect and classify acoustic events even if they occur simultaneously. The event-based macro F1-score tops 50% for both campaigns for the most prevalent noise sources. However, the results suggest that not all categories are detected equally well: the prevalence of an event in the dataset and its foreground-to-background ratio play a decisive role.
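
The event-based macro F1-score mentioned above averages per-class F1 with equal class weight; a simplified sketch with hypothetical match counts (real event matching also uses onset/offset tolerances, omitted here):

def macro_f1(per_class_counts):
    f1s = []
    for tp, fp, fn in per_class_counts.values():
        p = tp / (tp + fp) if tp + fp else 0.0      # precision: matched / all detections
        r = tp / (tp + fn) if tp + fn else 0.0      # recall: matched / all reference events
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s)                      # equal weight per class

# hypothetical (TP, FP, FN) counts for four noise-source classes
counts = {"traffic": (40, 10, 12), "birds": (25, 8, 20),
          "voices": (18, 15, 14), "construction": (5, 9, 11)}
print(round(macro_f1(counts), 3))                   # ~0.58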


Subject(s)
COVID-19 , Citizen Science , Humans , Pandemics , Communicable Disease Control , Acoustics
8.
J Acoust Soc Am ; 153(1): 573, 2023 01.
Article in English | MEDLINE | ID: covidwho-2232789

ABSTRACT

The COVID-19 pandemic has been a global event affecting all aspects of human life and society, including acoustic aspects. In this Special Issue on COVID-19 and acoustics, we present 48 papers discussing the acoustical impacts of the pandemic and how we deal with it. The papers are divided into seven categories which include: physical masking and speech production, speech perception, noise, the underwater soundscape, the urban soundscape, pathogen transmissibility, and medical diagnosis.


Subject(s)
COVID-19 , Speech Perception , Humans , Pandemics , Noise , Acoustics
9.
Sci Rep ; 13(1): 1567, 2023 01 28.
Article in English | MEDLINE | ID: covidwho-2221856

ABSTRACT

In the face of the global pandemic caused by COVID-19, researchers have increasingly turned to simple measures to detect and monitor the presence of the disease in individuals at home. We sought to determine if measures of neuromotor coordination, derived from acoustic time series, as well as phoneme-based and standard acoustic features extracted from recordings of simple speech tasks could aid in detecting the presence of COVID-19. We further hypothesized that these features would aid in characterizing the effect of COVID-19 on speech production systems. A protocol, consisting of a variety of speech tasks, was administered to 12 individuals with COVID-19 and 15 individuals with other viral infections at University Hospital Galway. From these recordings, we extracted a set of acoustic time series representative of speech production subsystems, as well as their univariate statistics. The time series were further utilized to derive correlation-based features, a proxy for speech production motor coordination. We additionally extracted phoneme-based features. These features were used to create machine learning models to distinguish between the COVID-19 positive and other viral infection groups, with respiratory- and laryngeal-based features resulting in the highest performance. Coordination-based features derived from harmonic-to-noise ratio time series from read speech discriminated between the two groups with an area under the ROC curve (AUC) of 0.94. A longitudinal case study of two subjects, one from each group, revealed differences in laryngeal-based acoustic features, consistent with observed physiological differences between the two groups. The results from this analysis highlight the promise of using nonintrusive sensing through simple speech recordings for early warning and tracking of COVID-19.
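
A rough sketch of the correlation-based coordination features described above (the delay scheme and feature choice are simplifications, not the authors' exact method): time-delayed copies of an acoustic time series are stacked, and the eigenvalue spectrum of their correlation matrix serves as the feature vector.

import numpy as np

def coordination_features(series, n_delays=15, delay=1):
    rows = [series[i * delay : len(series) - (n_delays - 1 - i) * delay]
            for i in range(n_delays)]                # time-delayed copies of the track
    corr = np.corrcoef(np.vstack(rows))              # (n_delays, n_delays) correlation matrix
    eigvals = np.linalg.eigvalsh(corr)[::-1]         # eigenvalue spectrum, largest first
    return eigvals / eigvals.sum()                   # normalized; flatter spectrum = weaker coupling

hnr_track = np.random.rand(500)                      # placeholder HNR-per-frame time series
print(coordination_features(hnr_track)[:5])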


Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , Speech/physiology , Acoustics , Noise , Speech Production Measurement/methods
10.
J Biomed Inform ; 138: 104283, 2023 02.
Article in English | MEDLINE | ID: covidwho-2180119

ABSTRACT

PURPOSE: Recent developments in the fields of artificial intelligence and acoustics have made it possible to objectively monitor cough in clinical and ambulatory settings. We hypothesized that time patterns of objectively measured cough in COVID-19 patients could predict clinical prognosis and help rapidly identify patients at high risk of intubation or death. METHODS: One hundred and twenty-three patients hospitalized with COVID-19 were enrolled at University of Florida Health Shands and the Centre Hospitalier de l'Université de Montréal. Patients' cough was continuously monitored digitally, along with clinical severity of disease, until hospital discharge, intubation, or death. The natural history of cough in hospitalized COVID-19 disease was described, and logistic models fitted on cough time patterns were used to predict clinical outcomes. RESULTS: In both cohorts, higher early coughing rates were associated with more favorable clinical outcomes. The transitional cough rate, i.e., the maximum coughs-per-hour rate predicting unfavorable outcomes, was 3.40, and the AUC for cough frequency as a predictor of unfavorable outcomes was 0.761. The initial 6 h (0.792) and 24 h (0.719) post-enrolment observation periods confirmed this association and showed similar predictive value. INTERPRETATION: Digital cough monitoring could be used as a prognostic biomarker to predict unfavorable clinical outcomes in COVID-19 disease. With early sampling periods showing good predictive value, this digital biomarker could be combined with clinical and paraclinical evaluation and is well suited for triaging patients in overwhelmed or resource-limited health programs.
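
A toy sketch of the logistic modelling idea (synthetic numbers, not the cohort data): fit a logistic model on the early hourly cough rate and read off the rate at which the predicted risk crosses 50%, analogous to the transitional cough rate above.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
cough_rate = rng.exponential(3.0, 120)                        # early coughs/hour, synthetic
unfavorable = ((cough_rate < 2.5) ^ (rng.random(120) < 0.15)).astype(int)  # noisy outcome labels

clf = LogisticRegression().fit(cough_rate.reshape(-1, 1), unfavorable)
b0, b1 = clf.intercept_[0], clf.coef_[0, 0]
print("transitional cough rate ~", -b0 / b1)                  # rate where P(unfavorable) = 0.5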


Subject(s)
COVID-19 , Humans , Cough , Artificial Intelligence , Acoustics , Biomarkers
11.
Sensors (Basel) ; 22(23)2022 Dec 06.
Article in English | MEDLINE | ID: covidwho-2163568

ABSTRACT

Coronavirus disease 2019 (COVID-19) has led to countless deaths and widespread global disruptions. Acoustic-based artificial intelligence (AI) tools could provide a simple, scalable, and prompt method to screen for COVID-19 using easily acquirable physiological sounds. Such systems have been demonstrated previously and have shown promise, but they lack robust analysis of their deployment in real-world settings when faced with diverse recording equipment, noise environments, and test subjects. The primary aim of this work is to begin to understand the impact of these real-world deployment challenges on system performance. Using Mel-Frequency Cepstral Coefficients (MFCC) and RelAtive SpecTrAl-Perceptual Linear Prediction (RASTA-PLP) features extracted from cough, speech, and breathing sounds in a crowdsourced dataset, we present a baseline classification system that obtains an average receiver operating characteristic area under the curve (AUC-ROC) of 0.77 when discriminating between COVID-19 and non-COVID subjects. The classifier performance is then evaluated on four additional datasets, resulting in performance variations between 0.64 and 0.87 AUC-ROC, depending on the sound type. By analyzing subsets of the available recordings, it is noted that the system performance degrades with certain recording devices, noise contamination, and symptom status. Furthermore, performance degrades when a uniform classification threshold from the training data is subsequently used across all datasets. However, the system performance is robust to confounding factors such as gender, age group, and the presence of other respiratory conditions. Finally, when analyzing multiple speech recordings from the same subjects, the system achieves promising performance with an AUC-ROC of 0.78, though the classification does appear to be impacted by natural speech variations. Overall, the proposed system, and by extension other acoustic-based diagnostic aids in the literature, could provide accuracy comparable to rapid antigen testing, but significant deployment challenges need to be understood and addressed prior to clinical use.
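
A bare-bones sketch of the feature pipeline named above (MFCC summary statistics into a simple classifier); the signals, labels, and classifier choice are placeholders, and RASTA-PLP extraction is omitted.

import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def mfcc_stats(y, sr=16000, n_mfcc=13):
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)       # (n_mfcc, frames)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])    # 2*n_mfcc summary features

rng = np.random.default_rng(0)
recordings = [rng.standard_normal(16000) for _ in range(40)]  # stand-ins for 1 s sound clips
labels = rng.integers(0, 2, 40)                               # hypothetical COVID status
X = np.vstack([mfcc_stats(r) for r in recordings])
clf = LogisticRegression(max_iter=2000).fit(X, labels)
print(roc_auc_score(labels, clf.predict_proba(X)[:, 1]))      # in-sample AUC-ROC (optimistic)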


Subject(s)
Artificial Intelligence , COVID-19 , Humans , COVID-19/diagnosis , Acoustics , Sound , Respiratory Sounds
12.
J Acoust Soc Am ; 152(5): 3102, 2022 Nov.
Article in English | MEDLINE | ID: covidwho-2137345

ABSTRACT

A substantial fraction of the membership of the Acoustical Society of America are faculty at various types of educational institutions and are actively engaged in educational activities. However, papers focusing on aspects of teaching, pedagogy, demonstrations, student learning, and other education topics are not often published in JASA, even though the Education in Acoustics Committee regularly offers special sessions on these topics at every ASA meeting. This special issue of JASA dedicated to Education in Acoustics includes 41 papers from authors all over the world. This introduction to the special issue briefly describes each of the papers, which have been organized into several broad categories: teaching methods and exercises; project-based learning; use of experiments, demos, and experiential learning; adapting to teaching during COVID-19; circuit models and impedance concepts; software apps and online resources; teaching musical acoustics; and descriptions of acoustics programs at a variety of institutions.


Subject(s)
COVID-19 , Humans , Acoustics , Schools , Problem-Based Learning , Electric Impedance
13.
J Acoust Soc Am ; 152(5): 2570, 2022 Nov.
Article in English | MEDLINE | ID: covidwho-2137344

ABSTRACT

This work presents the results of a perception-based study of changes in the local soundscape at residences across India during the last 2 years of the COVID-19 pandemic and their effects on well-being, productivity during work from home (WFH), online education, anxiety, and noise sensitivity. Using emails and social media platforms, an online cross-sectional survey was conducted involving 942 participants. The responses showed that a greater percentage of participants felt that the indoor environment was noisier during the 2020 lockdown, which was attributed to increased home-entertainment usage, video-calling, and family interaction. The outdoor soundscape was much quieter during the 2020 lockdown due to drastically reduced traffic and commercial activities; however, during the 2021 lockdown, it was perceived to be comparable with pre-COVID times. Changes in the indoor soundscape were shown to affect peace, happiness, and concentration while increasing annoyance, whereas the reduction in outdoor noise positively impacted these aspects. The responses indicate that indoor soundscape changes adversely affected productivity and online education. Consequently, only 15% of participants now prefer the WFH model, while 62% have reservations about online education. In some cases, the responses demonstrate a significant influence of demography and suggest improving the acoustic design of residences to support work.


Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Cross-Sectional Studies , Pandemics , Communicable Disease Control , Acoustics , India/epidemiology , Perception
14.
J Acoust Soc Am ; 152(2): 1192, 2022 08.
Article in English | MEDLINE | ID: covidwho-2070540

ABSTRACT

The SARS-CoV-2 pandemic drastically changed daily life. Lockdown measures resulted in reduced traffic mobility and, subsequently, a changed acoustic environment. This exceptional lockdown was used to analyze its impact on the urban acoustic environment using ecoacoustic indices. Using data from 22 automated sound recording devices located in 9 land use categories (LUCs) in Bochum, Germany, the normalized difference soundscape index (NDSI) and the Bioacoustic Index (BIO) were explored. The NDSI quantifies the proportion of anthropophonic to biophonic sounds, and the BIO quantifies the total sound activity of biological sources. The mean differences and standard deviation (SD) were calculated for 5 weeks before and 5 weeks during the first lockdown. Pronounced peaks in the NDSI and BIO before lockdown diminished markedly during lockdown, albeit with distinct differences between LUCs. The mean NDSI increased from 0.00 (SD = 0.43) to 0.15 (SD = 0.50), while the mean BIO decreased from 4.74 (SD = 2.64) to 4.03 (SD = 2.66). Using the NDSI and BIO together reveals that changes in the acoustic environment during lockdown were mainly driven by decreased anthropophonic sound sources. These results suggest that further studies are needed to tailor ecoacoustic indices more accurately to the conditions of the urban environment.
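
For orientation, the NDSI referenced above is the normalized difference between biophony and anthrophony band power, (B - A) / (B + A); a sketch with commonly used band limits (exact limits vary by implementation, and the test signal is synthetic):

import numpy as np
from scipy.signal import welch

def ndsi(signal, fs, anthro=(1000, 2000), bio=(2000, 8000)):
    f, psd = welch(signal, fs=fs, nperseg=4096)
    a = psd[(f >= anthro[0]) & (f < anthro[1])].sum()   # anthrophony band power
    b = psd[(f >= bio[0]) & (f < bio[1])].sum()         # biophony band power
    return (b - a) / (b + a)       # -1 (all anthropophonic) ... +1 (all biophonic)

fs = 22050
t = np.arange(0, 10, 1 / fs)
recording = np.sin(2 * np.pi * 1500 * t) + 0.5 * np.sin(2 * np.pi * 4000 * t)
print(ndsi(recording, fs))         # negative: the traffic-like 1.5 kHz tone dominates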


Subject(s)
COVID-19 , SARS-CoV-2 , Acoustics , COVID-19/epidemiology , Communicable Disease Control , Humans , Pandemics
15.
Int J Environ Res Public Health ; 19(19)2022 Sep 27.
Article in English | MEDLINE | ID: covidwho-2065936

ABSTRACT

The circular economy paradigm can benefit urban sustainability by eliminating waste and pollution, circulating products and materials, and regenerating nature. Furthermore, under an urban circular development scheme, environmental noise can be designed out. Current noise control policies and actions, undertaken at the source-medium-receiver level, are linear and offer minimal sustainability co-benefits. A circular approach to noise control strategies and soundscape design could offer numerous ecologically related co-benefits. The global literature documenting the advantages of implementing the circular economy in cities has highlighted noise mitigation as a given benefit. Research involving circular economy actions such as urban green infrastructure, green walls, sustainable mobility systems, and electro-mobility has acknowledged reduced noise levels as a major circularity outcome. In this research paper, we highlight the necessity of a circularity and bioeconomy approach to noise control. To this end, a preliminary experimental noise modeling study was conducted to showcase the acoustic benefits of green walls and electric vehicles in a medium-sized urban area of a Mediterranean island. The results indicate a noise level reduction of 4 dB(A) when simulating the introduction of urban circular development actions.


Subject(s)
Sound , Sustainable Growth , Acoustics , Cities , Noise/prevention & control
16.
Sensors (Basel) ; 22(17)2022 Sep 02.
Article in English | MEDLINE | ID: covidwho-2024052

ABSTRACT

Deep learning techniques such as convolutional neural networks (CNNs) have been successfully applied to identify pathological voices. However, the major disadvantage of these advanced models is their lack of interpretability in explaining the predicted outcomes, a drawback that creates a bottleneck for the adoption of voice-disorder classification and detection systems, especially in this pandemic period. In this paper, we propose using a series of learnable sinc functions to replace the very first layer of a commonly used CNN, yielding an explainable SincNet system for classifying or detecting pathological voices. The applied sinc filters, a front-end signal processor in SincNet, are critical for constructing a meaningful first layer and are used directly to extract the acoustic features from which the following networks generate high-level voice information. We conducted our tests on three different Far Eastern Memorial Hospital voice datasets. In our evaluations, the proposed approach achieves improvements of up to 7% in accuracy and 9% in sensitivity over conventional methods, demonstrating the superior performance of the SincNet system in predicting pathological input waveforms. More importantly, based on our results, we offer possible explanations of the relationship between the system output and the speech features extracted by the first layer.
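
The core idea of the learnable sinc layer can be sketched as follows (a simplified stand-in, not the authors' SincNet code; the cut-offs, kernel size, and waveform are illustrative): each first-layer filter is a band-pass defined by just two trainable cut-off frequencies, built as the difference of two sinc low-pass filters.

import torch

def sinc_bandpass_kernel(f_low, f_high, kernel_size=251, fs=16000):
    # difference of two ideal sinc low-pass filters = band-pass between f_low and f_high
    n = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1, dtype=torch.float32)
    lowpass = lambda fc: 2 * fc / fs * torch.sinc(2 * fc / fs * n)
    band = lowpass(f_high) - lowpass(f_low)
    return band * torch.hamming_window(kernel_size, periodic=False)   # windowed FIR taps

f_low = torch.tensor(300.0, requires_grad=True)     # the two learnable parameters per filter
f_high = torch.tensor(3000.0, requires_grad=True)
kernel = sinc_bandpass_kernel(f_low, f_high)
waveform = torch.randn(1, 1, 16000)                 # placeholder 1 s waveform
filtered = torch.nn.functional.conv1d(waveform, kernel.view(1, 1, -1))
filtered.sum().backward()                           # gradients flow back to f_low and f_high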


Subject(s)
Voice Disorders , Voice , Acoustics , Humans , Neural Networks, Computer , Voice Disorders/diagnosis
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 998-1001, 2022 07.
Article in English | MEDLINE | ID: covidwho-2018736

ABSTRACT

This work focuses on the automatic detection of COVID-19 from the analysis of vocal sounds, including sustained vowels, coughs, and speech while reading a short text. Specifically, we use Mel-spectrogram representations of these acoustic signals to train neural network-based models for the task at hand. The extraction of deep learned representations from the Mel-spectrograms is performed with Convolutional Neural Networks (CNNs). In an attempt to guide training towards more separable and robust inter-class embeddings, we explore the use of a triplet loss function. The experiments are conducted on the Your Voice Counts dataset, a new dataset of German speakers collected using smartphones. The results obtained support the suitability of triplet loss-based models for detecting COVID-19 from vocal sounds. The best Unweighted Average Recall (UAR) of 66.5% is obtained using a triplet loss-based model exploiting vocal sounds recorded while reading.
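
A minimal sketch of the triplet-loss training step described above (a toy encoder and random tensors stand in for the paper's Mel-spectrogram CNN and data):

import torch
import torch.nn as nn

embed = nn.Sequential(                       # toy CNN encoder for (1, 64, 128) Mel-spectrograms
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 32))

anchor = embed(torch.randn(8, 1, 64, 128))   # e.g. COVID-positive recordings
positive = embed(torch.randn(8, 1, 64, 128)) # other recordings of the same class
negative = embed(torch.randn(8, 1, 64, 128)) # recordings of the other class

loss = nn.TripletMarginLoss(margin=1.0)(anchor, positive, negative)
loss.backward()                              # pulls same-class embeddings together, pushes others apart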


Subject(s)
COVID-19 , Voice , Acoustics , COVID-19/diagnosis , Humans , Neural Networks, Computer , Speech
18.
Sci Total Environ ; 844: 157223, 2022 Oct 20.
Article in English | MEDLINE | ID: covidwho-1996546

ABSTRACT

The prolonged coronavirus disease (COVID-19) pandemic has substantially influenced numerous facets of daily life for over two years. Although a number of studies have explored the pandemic's impacts on soundscapes worldwide, this work has not been reviewed comprehensively or systematically, hence the lack of prospective soundscape goals based on global evidence. This review examines evidence of the COVID-19 crisis's impacts on soundscapes and quantifies the prevalence of unprecedented changes in acoustic environments. Two key research classes were identified through a systematic content analysis of the 119 included studies: (1) auditory perceptual change and (2) noise level change due to the COVID-19 pandemic/lockdown. Our qualitative synthesis ascertained the substantial adverse consequences of pandemic soundscapes for human health and well-being, while beneficial aspects of the COVID-19 pandemic for soundscapes were nonetheless identified. Furthermore, the meta-analysis results highlight that the observed average noise-level reduction (148 averaged samples derived from 31 studies) varied as a function of the stringency of the COVID-19 confinement policies imposed by governments, further moderated by urban morphology and the main noise sources. Given these collective findings, we propose soundscape materiality, its nexus with the related United Nations Sustainable Development Goals (SDGs), and prospective approaches to support resilient soundscapes during and after the pandemic, which should be achieved to enhance healthy living and human well-being.


Subject(s)
COVID-19 , Pandemics , Acoustics , COVID-19/epidemiology , Communicable Disease Control , Humans , Noise
19.
Sensors (Basel) ; 22(15)2022 Aug 08.
Article in English | MEDLINE | ID: covidwho-1994141

ABSTRACT

The development of MEMS acoustic resonators meets the increasing demand for in situ detection with higher performance and smaller size. In this paper, a lithium niobate film-based S1-mode Lamb wave resonator (HF-LWR) for high-sensitivity gravimetric biosensing is proposed. The fabricated resonators, based on a 400-nm X-cut lithium niobate film, showed a resonance frequency above 8 GHz. Moreover, a PMMA layer was used as the mass-sensing layer to study the performance of the biosensors based on HF-LWRs. Through optimizing the thickness of the lithium niobate film and the electrode configuration, the mass sensitivity of the biosensor could reach up to 74,000 Hz/(ng/cm²), with a maximum figure of merit (FOM) of 5.52 × 10^7, which shows great potential for pushing the performance boundaries of gravimetric-sensitive acoustic biosensors.


Subject(s)
Acoustics , Biosensing Techniques , Electrodes , Equipment Design , Vibration
20.
J Acoust Soc Am ; 152(1): 43, 2022 07.
Article in English | MEDLINE | ID: covidwho-1949893

ABSTRACT

Hands-on, project-based learning was difficult to achieve in online classes during the COVID-19 pandemic. The Engineering Experimentation course at Cooper Union teaches third-year mechanical engineering students practical experimental skills for measuring physical phenomena, which typically requires in-person laboratory classes. In response to COVID, a low-cost, at-home laboratory kit was devised to give students tools to conduct experiments. The kit included a microcontroller acting as a data-acquisition device and custom software to facilitate data transfer. A speed-of-sound laboratory was designed around the kit to teach skills in data collection, signal processing, and error analysis. The students derived the sound speed by placing two microphones a known distance apart and measuring the time for an impulsive signal to travel from one to the other. The students reported sound speeds of 180.7 to 477.8 m/s at temperatures from 273.7 to 315.9 K. While these reported speeds contained a large amount of error, the exercise allowed the students to learn how to account for sources of error within experiments. This paper also presents final projects designed by the students at home, an impedance tube and two Doppler-shift experiments, which exhibit successful and effective low-cost solutions for demonstrating and measuring acoustic phenomena.
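
The students' time-of-flight measurement reduces to dividing the microphone spacing by the arrival-time difference of the impulse; a sketch using cross-correlation on synthetic channels (sample rate, spacing, and delay are illustrative, not the course data):

import numpy as np
from scipy.signal import correlate

fs = 48000                           # assumed sample rate of the data-acquisition device
d = 1.0                              # microphone spacing in metres
# mic1, mic2 would be the two recorded channels; here a synthetic click delayed by 139 samples
click = np.zeros(4800); click[100] = 1.0
mic1 = click
mic2 = np.roll(click, 139)

lag = np.argmax(correlate(mic2, mic1, mode="full")) - (len(mic1) - 1)   # delay in samples
print("speed of sound ~", d / (lag / fs), "m/s")                        # 48000/139 ~ 345 m/s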


Subject(s)
COVID-19 , Laboratories , Acoustics , COVID-19/epidemiology , Humans , Pandemics , Students